Patent abstract:
1. TECHNICAL FIELD OF THE INVENTION The present invention relates to a stereo image parallax map fusion method, a three-dimensional image display method using the same, and a computer-readable recording medium recording a program for realizing the methods. 2. TECHNICAL PROBLEM TO BE SOLVED The present invention increases the accuracy of parallax maps by selecting an appropriate parallax, according to rules using edge information and the statistical characteristics of the parallax maps, from the region-based and feature-based matching parallax maps obtained by stereo matching of a digital stereoscopic image; it provides a stereo image parallax map fusion method, a three-dimensional image display method using the same, and a computer-readable recording medium recording a program for realizing the methods. 3. SUMMARY OF THE SOLUTION A stereo image parallax map fusion method applied to a stereo image parallax map fusion device comprises: a first step of initializing an index for indicating a position in an image; a second step of checking whether the reference image pixel at the position indicated by the index value is an edge pixel; a third step of performing edge pixel fusion, moving to the next position, and returning to the second step if the check of the second step finds an edge pixel; and a fourth step of performing non-edge pixel fusion, moving to the next position, and returning to the second step if the check of the second step does not find an edge pixel. 4. IMPORTANT USES OF THE INVENTION The present invention is used in multimedia systems and the like.
Publication number: KR20020095752A
Application number: KR1020010033943
Filing date: 2001-06-15
Publication date: 2002-12-28
Inventors: 엄기문; 허남호; 김형남
Applicant: 한국전자통신연구원 (ETRI)
IPC main class:
Patent description:

Method for Stereo Image Disparity Map Fusion and Method for Displaying a 3-Dimensional Image Using It
[13] The present invention relates to a stereo image parallax map fusion method, a three-dimensional image display method using the same, and a computer-readable recording medium recording a program for realizing the methods. In particular, the two disparity maps obtained by matching the left and right images of a digital stereo pair with an area-based matching method and a feature-based matching method are fused according to a set of rules, so that the shortcomings of each matching method are compensated and a more accurate disparity map is obtained.
[14] When one of the stereo images acquired from two left and right cameras is taken as the reference image and the other as the search image, the difference in image coordinates between the two images for the same point in space is called the parallax (disparity). Many studies in the field of stereo vision have addressed extracting this parallax or improving its accuracy.
[15] To obtain parallax from a stereo image, the corresponding point of an arbitrary point of the reference image must be found in the search image. Two conventional techniques exist for this.
[16] The first is the area based method.
[17] This method computes the similarity between the pixels in a window of constant size centered on a pixel of the reference image and the pixels in a window of the same size centered on each search-image pixel considered as a candidate corresponding point; the candidate with the highest similarity is taken as the corresponding point, and the parallax is computed from it.
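The window comparison described above can be sketched as follows. This is a minimal illustration and not the patented method itself: it assumes rectified images so the search runs along a single scanline, and it uses the sum of absolute differences (SAD) as the similarity measure, one common choice that the text leaves open.

```python
import numpy as np

def area_based_disparity(ref, tgt, x, y, max_d=16, half=2):
    """Illustrative sketch: find the disparity of reference pixel (y, x)
    by comparing fixed-size windows along the same scanline.
    SAD is used as the (dis)similarity measure; lower cost is better."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):          # candidate disparities
        if x - d - half < 0:            # window would leave the image
            break
        win_ref = ref[y-half:y+half+1, x-half:x+half+1].astype(int)
        win_tgt = tgt[y-half:y+half+1, x-d-half:x-d+half+1].astype(int)
        cost = np.abs(win_ref - win_tgt).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

For a search image that is simply the reference shifted by a few pixels, the recovered disparity equals the shift.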
[18] Examples of this method include Korean Patent Publication No. 1999-0080598 (Method for Measuring Similarity Using Number of Matched Pixels and Apparatus for Implementing It, Kim Hyung-gon and two others) and Japanese Patent Publication P2000-356514 (Distance Measuring Method and Distance Measuring Device Using a Stereo Camera, Toshiba Corporation).
[19] The second is the feature-based method.
[20] This method extracts matching primitives such as edges and zero crossings from the reference and search images, selects corresponding feature pairs by computing the similarity between features, and calculates the parallax from the selected pairs.
[21] Examples of this method include Japanese Patent Laid-Open No. P2000-172884 (Method of Creating a Three-Dimensional Model, Myungjeon Co., Ltd.) and the paper "Hierarchical Waveform Matching: A New Feature-based Stereo Technique" (D. M. McKeown and Y. Hsieh, Proceedings of the Conference on Computer Vision and Pattern Recognition, Champaign, Ill., June 16-18, 1992).
[22] However, neither of these methods performs best in all cases; each shows different performance depending on the type of image and the position of the pixel being matched.
[23] For example, the region-based matching method computes the parallax for all pixels in regions where the parallax varies continuously, which requires a long computation time, and it produces matching errors at parallax discontinuities and in regions of similar brightness.
[24] The feature-based matching method, by contrast, computes the parallax only where matching features exist, so the computation time is short and the parallax at discontinuities can be obtained relatively accurately; however, because the parallax cannot be obtained for every pixel, interpolation of the remaining pixels is necessary.
[25] Therefore, various methods have recently been proposed to combine the strengths of the two approaches, one of which obtains a more accurate disparity map by fusing the results of the two matchings according to a set of rules. Examples include "Refinement of Disparity Estimates Through the Fusion of Monocular Image Segmentations" by D. M. McKeown, Jr. et al. (Proceedings of Computer Vision and Pattern Recognition, pp. 486-492, 1992), "Performance Evaluation of Scene Registration and Stereo Matching for Cartographic Feature Extraction" by Hsieh et al. (IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, Feb. 1992), and "Refinement of Disparity Map using the Rule-based Fusion of Area and Feature-based Matching Results" (Proceedings of International Symposium on Remote Sensing, pp. 304-309, Nov. 1999).
[26] Among these, the method proposed by D. M. McKeown, Jr. et al. divides the reference image into several regions using various image segmentation algorithms, computes the disparity histograms of the two disparity maps within each region, and then selects the disparity with the larger histogram frequency.
[27] However, this method has the problems that the result may vary with the outcome of the region segmentation, and that a wrong parallax may be selected at disparity discontinuities, where many occlusions and mismatches occur.
[28] The method proposed by Hsieh et al. computes, for each of the two parallaxes obtained by region-based and feature-based matching, the brightness difference between the reference image and the search image within a window of predetermined size centered on the matched point, and selects the parallax with the smaller difference.
[29] However, although this method has the advantage of being simple to implement, it may select an incorrect parallax when the brightness difference between the reference image and the search image is large or when an occluded region is present.
[30] The method proposed by Um Ki-mun et al. compensates for the shortcomings of the above methods by selecting the parallax using edge images and statistical characteristics, such as the standard deviation of each disparity map, within a window of constant size (3 × 3).
[31] However, this method has the problem that the actual fusion is performed only at edge pixels.
[32] The present invention has been proposed to solve the problems described above. Its purpose is to provide a stereo image parallax map fusion method that enhances the accuracy of the parallax map by selecting an appropriate parallax, according to rules using edge information and the statistical characteristics of the parallax maps, from the region-based and feature-based matching disparity maps obtained by stereo matching of digital stereoscopic images, together with a 3D image display method using the same and a computer-readable recording medium recording a program for realizing the methods.
[1] FIG. 1 is a configuration diagram of an embodiment of a three-dimensional image display system to which the present invention is applied;
[2] FIG. 2 is a flowchart illustrating a three-dimensional image display method according to the present invention;
[3] FIG. 3 is a flowchart illustrating a stereo image disparity map fusion method according to the present invention;
[4] FIG. 4 is a flowchart illustrating the operation of the edge pixel fusion process in the stereo image parallax map fusion method according to the present invention;
[5] FIG. 5 is a flowchart illustrating an exemplary operation of validating the parallax within the edge pixel fusion process in the stereo image parallax map fusion method according to the present invention;
[6] FIG. 6 is a flowchart illustrating an embodiment of the non-edge pixel fusion process in the stereo image parallax map fusion method according to the present invention;
[7] * Explanation of symbols for the main parts of the drawings
[8] 11-1: Left Camera 11-2: Right Camera
[9] 12-1: Left edge extracting unit 12-2: Right edge extracting unit
[10] 13: Region-based matching unit 14: Feature-based matching unit
[11] 15: fusion processing unit 16: interpolation unit
[12] 17: parallax map recording unit 18: 3D image display unit
[33] According to an aspect of the present invention, there is provided a stereo image disparity map fusion method applied to a stereo image parallax map fusion device, comprising: a first step of initializing an index for indicating a position in an image; a second step of checking whether the reference image pixel at the position indicated by the index value is an edge pixel; a third step of performing edge pixel fusion, moving to the next position, and returning to the second step if the check of the second step finds an edge pixel; and a fourth step of performing non-edge pixel fusion, moving to the next position, and returning to the second step if the check of the second step does not find an edge pixel.
[34] In addition, the present invention provides a three-dimensional image display method using the stereo image parallax map fusion method, applied to a three-dimensional image display device, comprising: a first step of receiving left and right images from left and right image input devices and extracting left and right edge images from the input images; a second step of selecting a parallax according to the edge characteristics of each pixel and the characteristics of the parallax maps, using the region-based and feature-based matching parallax maps obtained by inputting the left and right images and the left and right edge images; a third step of recording the selected parallax as a digital image; and a fourth step of displaying the parallax map as a three-dimensional model.
[35] Meanwhile, the present invention provides a computer-readable recording medium recording a program for realizing, on a video parallax map fusion device having a processor: a first function of initializing an index for indicating a position in an image; a second function of checking whether the reference image pixel at the position indicated by the index value is an edge pixel; a third function of performing edge pixel fusion, moving to the next position, and returning to the second function if the check of the second function finds an edge pixel; and a fourth function of performing non-edge pixel fusion, moving to the next position, and returning to the second function if the check of the second function does not find an edge pixel.
[36] The present invention also provides a computer-readable recording medium recording a program for realizing, on a three-dimensional image display device having a processor: a first function of receiving left and right images from left and right image input devices and extracting left and right edge images from the received images; a second function of selecting a parallax according to the edge characteristics of each pixel and the characteristics of the parallax maps, using the region-based and feature-based matching parallax maps obtained by inputting the left and right images and the left and right edge images; a third function of recording the selected parallax as a digital image; and a fourth function of displaying the parallax map as a three-dimensional model.
[37] The above objects, features and advantages will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
[38] FIG. 1 is a configuration diagram of an embodiment of a three-dimensional image display system to which the present invention is applied.
[39] As shown in FIG. 1, the three-dimensional image display system includes edge extracting units 12-1 and 12-2, which extract edge images from the reference and search images provided by the two cameras 11-1 and 11-2; a region-based matching unit 13 and a feature-based matching unit 14, which receive the images and edge images; a fusion processing unit 16, to which the edge images and the parallax maps output by the matching units are input and which selects an appropriate parallax using rules set according to the presence or absence of parallax in the parallax maps; optionally, an interpolation unit 15 at the front end of the fusion processing unit 16 for obtaining the parallax of unmatched pixels; a parallax map recording unit 17, which finally records the parallax obtained for each pixel as a digital image; and a three-dimensional image display unit 18 for synthesizing the parallax map and the reference image and displaying the three-dimensional image.
[40] Here, depending on the region-based and feature-based matching methods used, the edge images extracted by the edge extractors 12-1 and 12-2 can be used as inputs, together with the reference and search images, to the region-based matching unit 13 and the feature-based matching unit 14.
[41] The operation of the 3D image display system configured as described above will now be described with reference to FIG. 2.
[42] FIG. 2 is a flowchart illustrating a three-dimensional image display method according to the present invention.
[43] As shown in FIG. 2, each image obtained from the left and right cameras 11-1 and 11-2 is first input to the edge extractors 12-1 and 12-2, the region-based matching unit 13, and the feature-based matching unit 14. The edge extractors extract an edge image from each camera image using a "Prewitt" or "Canny" edge extraction operator (201). The region-based matching unit 13 and the feature-based matching unit 14 then receive the camera images and the extracted edge images and each acquire a disparity map (202).
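Step 201 above can be illustrated with a minimal Prewitt operator. This is a sketch under assumptions not fixed by the text: the gradient-magnitude threshold is arbitrary, and the output uses the 255/0 edge-brightness convention described later in the description.

```python
import numpy as np

def prewitt_edges(img, threshold=128):
    """Illustrative sketch of step 201: extract a binary edge image with
    the Prewitt operator. Edge pixels are assigned brightness 255 and
    all other pixels 0. The threshold value 128 is an assumption."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])  # horizontal gradient
    ky = kx.T                                            # vertical gradient
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y-1:y+2, x-1:x+2]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return np.where(mag >= threshold, 255, 0).astype(np.uint8)
```

Applied to an image with a vertical brightness step, the pixels along the step are marked 255 and flat regions 0.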
[44] The interpolation unit 15 receives the feature-based matching parallax map and the edge image and interpolates the parallax of the non-edge pixels of the feature-based parallax map (203).
[45] Using the disparity maps acquired by the region-based matching unit 13 and the feature-based matching unit 14 and the edge image obtained by the edge extractors, the fusion processing unit 16 first classifies the pixels whose parallax is to be computed into edge pixels and non-edge pixels, and then selects an appropriate parallax for each class according to predetermined rules (204).
[46] The parallax map recording unit 17 then records the selected parallax of each pixel as a digital image (205), and the 3D image display unit 18 synthesizes the reference image and the parallax map using 3D graphics and displays a 3D image (206).
[47] FIG. 3 is a flowchart illustrating a stereo image disparity map fusion method according to the present invention.
[48] In the following description, i and j are the indexes indicating a position in the image: i is the row position coordinate index and varies from 0 to (the row-direction size of the image − 1), and j is the column position coordinate index and varies from 0 to (the column-direction size of the image − 1).
[49] First, i and j, which are indexes for indicating a position in an image, are initialized to 0 (301).
[50] Then, starting from the pixel with coordinates (0, 0), it is checked whether P_ref(i,j) is an edge (302). Here, P_ref(i,j) denotes the pixel of the reference image (left image) at coordinates (i, j), and the edge check is made by examining the brightness value of the pixel at the corresponding position in the edge image. Normally, the edge image assigns a brightness of 255 to edge pixels and 0 otherwise.
[51] As a result of the check, edge pixel fusion is performed if the pixel is an edge (303); otherwise, the parallax of the non-edge pixels of the feature-based matching parallax map is interpolated (304) and non-edge pixel fusion is performed (305).
[52] After edge pixel fusion or non-edge pixel fusion has been performed on one pixel, the process moves to the next pixel, and steps 302-305 are repeated until all pixels have been processed (306-309).
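The scan loop of steps 301-309 can be sketched as follows. The two fusion routines are stand-ins passed as callables, since their rules are detailed in FIGs. 4 to 6; the edge image uses the 255/0 brightness convention described above.

```python
def fuse_disparity_maps(edge_ref, fuse_edge_pixel, fuse_non_edge_pixel):
    """Illustrative sketch of steps 301-309: scan every pixel of the
    reference edge image, applying edge-pixel fusion where the pixel is
    an edge (brightness 255) and non-edge fusion otherwise."""
    rows = len(edge_ref)
    cols = len(edge_ref[0])
    d_new = [[0] * cols for _ in range(rows)]   # fused parallax map
    for i in range(rows):                       # step 301: indexes start at 0
        for j in range(cols):
            if edge_ref[i][j] == 255:           # step 302: edge check
                d_new[i][j] = fuse_edge_pixel(i, j)       # step 303
            else:
                d_new[i][j] = fuse_non_edge_pixel(i, j)   # steps 304-305
    return d_new
```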
[53] The basic assumption of the parallax selection rules proposed below is that, for an edge pixel, the matching point must also be an edge pixel with the same edge direction as that of the reference pixel. In addition, the parallax is selected on the assumption that the feature-based matching result is more accurate than the region-based result for vertical edge pixels, while the region-based result is more accurate than the feature-based result for horizontal edge pixels.
[54] FIG. 4 is a flowchart illustrating the operation of the edge pixel fusion process in the stereo image parallax map fusion method according to the present invention.
[55] First, regarding the variables used in this figure: P_target,area(i,j) denotes the matching point in the target image obtained from the parallax of P_ref(i,j) computed by the region-based matching method, and P_target,ftr(i,j) denotes the matching point in the target image obtained from the parallax of P_ref(i,j) computed by the feature-based matching method.
[56] In addition, d_new(i,j) denotes the parallax of P_ref(i,j) newly obtained by fusion, d_area(i,j) the parallax obtained by the region-based matching method, and d_ftr(i,j) the parallax of P_ref(i,j) obtained by the feature-based matching method.
[57] In this process, since the pixel P_ref(i,j) to be matched is an edge pixel, its matching point should also be an edge pixel. Because a matching point that is itself an edge pixel is more likely to be the correct match, it is first checked whether P_target,area(i,j) and P_target,ftr(i,j) are both edge pixels.
[58] If both P_target,area(i,j) and P_target,ftr(i,j) are edge pixels, it is checked whether the edge direction of P_ref(i,j) matches that of P_target,area(i,j), and whether it matches that of P_target,ftr(i,j) (404). The edge direction of an edge pixel may be determined from the direction of the line element (connected edge) to which the pixel belongs, or from the gradient of the brightness change at the pixel; comparing the edge directions of the two points serves to select as matching points the points with high directional similarity.
[59] If the edge direction of P_ref(i,j) matches both that of P_target,area(i,j) and that of P_target,ftr(i,j), it is checked whether the direction of P_ref(i,j) is horizontal (407); if it is horizontal, d_area(i,j) becomes d_new(i,j) (413), and if it is not horizontal, d_ftr(i,j) becomes d_new(i,j) (412).
[60] The reason for checking whether the direction is horizontal is as follows. In the present invention, the direction-similarity judgment establishes a criterion separating the horizontal from the vertical direction, and only decides whether a pixel is horizontal or vertical according to this criterion. When both results show similar direction, or when neither matched point is an edge pixel, the region-based matching result is selected for the horizontal direction and the feature-based matching result for the vertical direction, because feature-based matching of horizontal edges generally has a higher probability of error than region-based matching.
[61] If only the edge direction of P_target,ftr(i,j) matches that of P_ref(i,j) (405), d_ftr(i,j) becomes d_new(i,j) (411). If only the edge direction of P_target,area(i,j) matches (406), a matching point is found through the parallax validity check (414). If all the directions differ, it is checked whether the direction of P_ref(i,j) is horizontal (408); if it is horizontal, d_area(i,j) becomes d_new(i,j) (410), and if not, d_ftr(i,j) becomes d_new(i,j) (411).
[62] If P_target,area(i,j) or P_target,ftr(i,j) is not an edge (401), then: if P_target,ftr(i,j) is an edge (402), d_ftr(i,j) becomes d_new(i,j) (409); if P_target,area(i,j) is an edge (403), a matching point is found through the parallax validity check (414); and if neither is an edge, it is checked whether the direction of P_ref(i,j) is horizontal (408), with d_area(i,j) becoming d_new(i,j) if it is horizontal (410) and d_ftr(i,j) becoming d_new(i,j) if it is not (411).
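The FIG. 4 decision tree can be sketched as pure selection logic. The edge and direction tests are assumed to be precomputed as booleans, and the FIG. 5 validity check is supplied as a callable; step numbers follow the flowchart.

```python
def fuse_edge_pixel(tgt_area_is_edge, tgt_ftr_is_edge,
                    dir_area_same, dir_ftr_same, ref_is_horizontal,
                    d_area, d_ftr, validate):
    """Illustrative sketch of the FIG. 4 edge-pixel fusion rules."""
    if tgt_area_is_edge and tgt_ftr_is_edge:           # 401: both edges
        if dir_area_same and dir_ftr_same:             # 404: both directions match
            return d_area if ref_is_horizontal else d_ftr   # 407 -> 413 / 412
        if dir_ftr_same:                               # 405: only feature match
            return d_ftr                               # 411
        if dir_area_same:                              # 406: only area match
            return validate()                          # 414: validity check
        return d_area if ref_is_horizontal else d_ftr  # 408 -> 410 / 411
    if tgt_ftr_is_edge:                                # 402
        return d_ftr                                   # 409
    if tgt_area_is_edge:                               # 403
        return validate()                              # 414
    return d_area if ref_is_horizontal else d_ftr      # 408 -> 410 / 411
```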
[63] FIG. 5 is a flowchart illustrating an operation of validating a parallax during an edge pixel fusion process in the stereo image parallax map fusion method according to the present invention.
[64] First, the description of each major variable used is as follows.
[65] First, avg_area(i,j) denotes the mean of the region-based matching parallax within the 3×3 window centered on d_area(i,j), and std_area(i,j) denotes the standard deviation of the region-based matching parallax map within that 3×3 window.
[66] diff(i,j) denotes the absolute difference between the parallax d_area(i,j) obtained by region-based matching and the mean parallax avg_area(i,j) within the 3×3 window. It is used to check whether the point differs significantly from the surrounding parallaxes; if the difference is large, the point is likely to be a mismatch.
[67] Th1 is a threshold for judging the difference between diff(i,j) and std_area(i,j), and Th2 is a threshold for diff(i,j), that is, for judging the difference between the region-based parallax d_area(i,j) and avg_area(i,j).
[68] In operation, avg_area(i,j) and std_area(i,j) are first calculated (501).
[69] diff(i,j) is compared with std_area(i,j), and if the difference is less than the threshold Th1 (502), diff(i,j) is then compared with the smaller of the threshold Th2 and std_area(i,j) (503); if diff(i,j) is smaller, the parallax is judged likely to be valid (504); otherwise, d_ftr(i,j) is selected as d_new(i,j) (505). The check in step 503 serves to examine more reliably the accuracy of d_area(i,j), which step 502 found likely to be a correct parallax.
[70] If the comparison of diff(i,j) with std_area(i,j) in step 502 finds that the difference is not smaller than the threshold Th1, d_area(i,j) is judged to be an incorrect parallax, and the feature-based matching parallax d_ftr(i,j) is selected as d_new(i,j) (505).
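The FIG. 5 validity check can be sketched as follows. Note one assumption: the text does not pin down the exact form of the step 502 comparison between diff(i,j), std_area(i,j), and Th1, so it is paraphrased here as diff < std + Th1; the step 503 comparison against min(Th2, std) follows the description directly.

```python
import numpy as np

def validate_parallax(d_area_map, i, j, d_ftr, th1, th2):
    """Illustrative sketch of the FIG. 5 parallax validity check.
    avg and std of the region-based parallax are taken over the 3x3
    window centered on (i, j)."""
    win = d_area_map[i-1:i+2, j-1:j+2].astype(float)
    avg = win.mean()                       # avg_area(i, j), step 501
    std = win.std()                        # std_area(i, j), step 501
    d_area = float(d_area_map[i, j])
    diff = abs(d_area - avg)               # diff(i, j)
    if diff < std + th1:                   # step 502 (assumed exact form)
        if diff < min(th2, std):           # step 503
            return d_area                  # step 504: judged valid
    return d_ftr                           # step 505: fall back to feature-based
```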
[71] FIG. 6 is a flowchart illustrating an embodiment of the non-edge pixel fusion process in the stereo image parallax map fusion method according to the present invention.
[72] First, regarding the main variables used in FIG. 6: Hist_area[d] and Hist_ftr[d] denote the histograms of parallax d, computed within a 3×3 window centered on (i, j), over the parallax maps obtained by region-based and feature-based matching respectively, and the parallax d with the maximum frequency count is determined from them.
[73] In the present invention, parallax interpolation is first performed on the non-edge pixels of the feature-based matching parallax map (304), and fusion is then performed on the non-edge pixels. To fuse a non-edge pixel, a 3×3 window centered on it is first set (601).
[74] Then, the edge image is examined to determine whether each of the remaining eight pixels in the set 3×3 window (the neighbors of the center pixel (i, j)) is an edge pixel (602), moving to the next pixel in the window (603) and repeating the edge check (602). In general, parallax changes discontinuously in many places around edges, so the parallax distribution varies near edge pixels; edge pixels are therefore excluded in order to select only pixels whose parallax varies relatively gently.
[75] Excluding the edge pixels as above, the histogram of each disparity map is calculated (604) and the parallax with the maximum frequency in each of the two maps is obtained. The two are then compared: if the parallax with the overall maximum frequency is the one from the region-based matching map (605), d_area(i,j) is selected as d_new(i,j) (606); otherwise, the feature-based matching parallax d_ftr(i,j) is selected as d_new(i,j) (607).
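The non-edge fusion of steps 601-607 can be sketched as follows. The windows are given as flat lists of nine values with the center at index 4, and the tie-break in favor of the region-based map when both peaks are equal is an assumption, since the text does not specify it.

```python
from collections import Counter

def fuse_non_edge_pixel(d_area_win, d_ftr_win, edge_win, d_area, d_ftr):
    """Illustrative sketch of FIG. 6: within the 3x3 window, skip edge
    pixels, build a parallax histogram per map (step 604), and keep the
    parallax from whichever map produced the larger peak frequency
    (steps 605-607). Ties go to the region-based map (assumption)."""
    hist_area, hist_ftr = Counter(), Counter()
    for k in range(9):
        if k == 4 or edge_win[k] == 255:   # step 602: skip center and edge pixels
            continue
        hist_area[d_area_win[k]] += 1      # step 604: region-based histogram
        hist_ftr[d_ftr_win[k]] += 1        # step 604: feature-based histogram
    peak_area = max(hist_area.values(), default=0)
    peak_ftr = max(hist_ftr.values(), default=0)
    # step 605: does the maximum frequency come from the region-based map?
    return d_area if peak_area >= peak_ftr else d_ftr   # steps 606 / 607
```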
[76] As described above, the method of the present invention may be implemented as a program and stored in a computer-readable form on a recording medium (CD-ROM, RAM, ROM, floppy disk, hard disk, magneto-optical disk, etc.).
[77] The present invention described above is not limited to the embodiments and accompanying drawings described herein, and it will be clear to those of ordinary skill in the art that various substitutions, modifications, and changes are possible without departing from the technical spirit of the present invention.
[78] As described above, the present invention has the effect of obtaining a more accurate disparity map by fusing the two matching results through appropriate rules, thereby compensating for the disadvantages of conventional region-based and feature-based matching systems and methods.
[79] In addition, by adaptively applying different rules according to the pixel type, the present invention obtains a more accurate disparity map, solving the problems of conventional parallax fusion methods that apply a single rule regardless of pixel type or apply rules only to edge pixels.
Claims:
Claims (13)
[Claim 1] A stereo image parallax map fusion method applied to a stereo image parallax map fusion device,
A first step of initializing an index for indicating a position in the image;
A second step of checking whether the reference image pixel at the position indicated by the index value is an edge image;
A third step of performing edge pixel fusion, moving to a next position, and proceeding to the second step if it is an edge image as a result of the checking of the second step; And
A fourth step of performing non-edge pixel fusion, moving to a next position, and proceeding to the second step if it is not an edge image as a result of the checking of the second step,
A stereo image parallax map fusion method comprising the foregoing steps.
[Claim 2] The method of claim 1,
The edge pixel fusion process of the third step,
A fifth step of checking whether the matching point of the target image by the region-based matching method and the matching point of the target image by the feature-based matching method are edge images;
A sixth step of selecting a parallax by a feature-based matching method when the matching point of the target image by the feature-based matching method is an edge image as a result of the checking of the fifth step;
A seventh step of verifying validity of the parallax when the matching point of the target image by the region-based matching method is an edge image as a result of the checking of the fifth step;
An eighth step of, when neither matching point is an edge image as a result of the checking of the fifth step, checking the direction of the reference image pixel and selecting a parallax by the region-based matching method if it is horizontal and by the feature-based matching method if it is not horizontal; And
A ninth step of, when both matching points are edge images as a result of the checking of the fifth step, selecting a parallax according to the matching point of the target image by the feature-based matching method, the matching point of the target image by the region-based matching method, and the direction of the reference image pixel
A stereo image parallax map fusion method comprising the foregoing steps.
[Claim 3] The method of claim 2,
The ninth step,
A tenth step of checking whether the edge direction of the matching point of the target image by the feature-based matching method and the edge direction of the matching point of the target image by the region-based matching method are the same as the edge direction of the reference image pixel;
An eleventh step of selecting a parallax by the feature-based matching method when, as a result of the checking of the tenth step, only the matching point by the feature-based matching method has the same direction as the reference image pixel;
A twelfth step of selecting a parallax by verifying the validity of the parallax when, as a result of the checking of the tenth step, only the matching point by the region-based matching method has the same direction as the reference image pixel;
A thirteenth step of, when the directions of the matching point by the feature-based matching method, the matching point by the region-based matching method, and the reference image pixel all differ as a result of the checking of the tenth step, checking the direction of the reference image pixel and selecting a parallax by the region-based matching method if it is horizontal and by the feature-based matching method if it is not horizontal; And
A fourteenth step of, when the directions of the matching point by the feature-based matching method, the matching point by the region-based matching method, and the reference image pixel are all the same as a result of the checking of the tenth step, checking the direction of the reference image pixel and selecting a parallax by the region-based matching method if it is horizontal and by the feature-based matching method if it is not horizontal
A stereo image parallax map fusion method comprising the foregoing steps.
[Claim 4] The method of claim 2 or 3,
The process of selecting the parallax by verifying the validity of the parallax,
A fifteenth step of calculating, within a predetermined region, a mean parallax of the region-based matched parallax map and a standard deviation of the parallaxes of the region-based matched parallax map;
A sixteenth step of comparing the absolute difference between the parallax obtained by the region-based matching and the mean parallax in the predetermined region against the standard deviation, and selecting a parallax by the feature-based matching method if the difference is greater than or equal to a first threshold; And
A seventeenth step of, if the difference is smaller than the first threshold, comparing the absolute difference between the parallax obtained by the region-based matching and the mean parallax in the predetermined region with the smaller of the standard deviation and a second threshold, and selecting a parallax by the region-based matching method if the absolute difference is smaller, and by the feature-based matching method if it is larger or equal
Stereo image parallax map fusion method comprising a.
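The validity verification of claim 4 amounts to testing the region-based disparity against local statistics of its own map. The sketch below is one plausible reading of the claim; the threshold values `t1` and `t2` are assumptions, not values from the patent.

```python
import numpy as np

def verify_validity(d_region, d_feature, region_window, t1=4.0, t2=2.0):
    """Validity-based selection between the two candidate parallaxes.

    region_window: region-based parallaxes in a window around the pixel.
    t1, t2: the first and second thresholds of claim 4 (values assumed).
    """
    mean_d = float(np.mean(region_window))   # mean parallax in the window
    sigma = float(np.std(region_window))     # standard deviation (15th step)
    diff = abs(d_region - mean_d)
    if diff >= t1:                           # 16th step: outlier w.r.t. the mean
        return d_feature
    # 17th step: accept the region-based parallax only if it stays within
    # the tighter of the window's standard deviation and the second threshold
    return d_region if diff < min(sigma, t2) else d_feature
```

Intuitively, a region-based disparity that deviates strongly from its neighborhood mean is likely a mismatch (e.g. in occluded or textureless areas), so the feature-based value is substituted.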
[Claim 5] The method according to any one of claims 1 to 3,
The non-edge pixel fusion process of the fourth step,
A fifteenth step of interpolating parallaxes for the non-edge pixels of the feature-based matched parallax map;
A sixteenth step of setting a predetermined region and calculating, for the pixels other than edge pixels in the region, a histogram of the parallaxes obtained by the region-based matching and a histogram of the parallaxes obtained by the feature-based matching;
A seventeenth step of finding, between the histogram of the parallaxes by the region-based matching and the histogram of the parallaxes by the feature-based matching, the histogram having the maximum frequency; And
An eighteenth step of selecting a parallax by the region-based matching method if the histogram having the maximum frequency is the histogram of the parallaxes by the region-based matching, and by the feature-based matching method if it is the histogram of the parallaxes by the feature-based matching
Stereo image parallax map fusion method comprising a.
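The non-edge fusion of claim 5 is a histogram vote between the two maps within a window. A minimal sketch, assuming the feature-based map has already been interpolated (the fifteenth step) and assuming a hypothetical disparity range and bin count:

```python
import numpy as np

def fuse_non_edge(region_win, feature_win, num_bins=64, d_range=(0, 64)):
    """Histogram-based selection for a non-edge pixel.

    region_win / feature_win: parallaxes of the non-edge pixels of each map
    inside a predetermined window; the feature-based map is assumed to be
    densely interpolated beforehand so every pixel carries a parallax.
    """
    h_region, edges = np.histogram(region_win, bins=num_bins, range=d_range)
    h_feature, _ = np.histogram(feature_win, bins=num_bins, range=d_range)
    # pick the map whose histogram contains the single most frequent bin
    if h_region.max() >= h_feature.max():
        peak = edges[np.argmax(h_region)]    # region-based map wins the vote
        source = "region"
    else:
        peak = edges[np.argmax(h_feature)]   # feature-based map wins the vote
        source = "feature"
    return source, float(peak)
```

A sharper histogram peak indicates the map whose disparities are locally more consistent, which is why the maximum-frequency histogram decides the selection.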
[Claim 6] A three-dimensional image display method using a stereo image parallax map fusion method applied to a three-dimensional image display device,
A first step of receiving left and right images from left and right image input devices and extracting left and right edge images from the received images;
A second step of selecting a parallax according to the edge characteristics of the pixels and the statistical characteristics of the parallax maps, using a region-based matched parallax map and a feature-based matched parallax map obtained by inputting the left and right images and the left and right edge images;
A third step of recording the selected parallaxes as a digital image; And
A fourth step of visually displaying the parallax map as a three-dimensional model
3D image display method using a stereo image parallax map fusion method comprising a.
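The four steps of claim 6 form a pipeline from the stereo pair to a 3-D view. The sketch below only illustrates the data flow; every callback (`edge_detect`, the two `stereo_match_*` matchers, the per-pixel `fuse` rule of claims 1 to 5, and `render_3d`) is a hypothetical stand-in, not an API defined by the patent.

```python
import numpy as np

def display_pipeline(left, right, stereo_match_region, stereo_match_feature,
                     edge_detect, fuse, render_3d):
    """End-to-end sketch of the display method: edges -> two parallax maps
    -> per-pixel fusion -> disparity image -> 3-D visualization."""
    # 1st step: extract left and right edge images from the input pair
    left_edges, right_edges = edge_detect(left), edge_detect(right)
    # obtain the two candidate parallax maps from the two matchers
    d_region = stereo_match_region(left, right)
    d_feature = stereo_match_feature(left, right, left_edges, right_edges)
    # 2nd step: per-pixel selection between the two maps
    fused = np.empty_like(d_region)
    for y in range(fused.shape[0]):
        for x in range(fused.shape[1]):
            fused[y, x] = fuse(y, x, left_edges, d_region, d_feature)
    # 3rd step: record the fused map as a digital image (8-bit here)
    disparity_image = np.clip(fused, 0, 255).astype(np.uint8)
    # 4th step: visualize the parallax map as a three-dimensional model
    return render_3d(disparity_image)
```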
[Claim 7] The method of claim 6,
The parallax selection process of the second step,
A fifth step of initializing an index for indicating a position in the image;
A sixth step of checking whether the reference image pixel at the position indicated by the index value is an edge image;
A seventh step of performing edge pixel fusion, moving to a next position, and proceeding to the sixth step if it is an edge image as a result of the checking of the sixth step; And
An eighth step of performing non-edge pixel fusion, moving to a next position, and proceeding to the sixth step if it is not an edge image as a result of the checking of the sixth step
3D image display method using a stereo image parallax map fusion method comprising a.
[Claim 8] The method of claim 7,
The edge pixel fusion process of the seventh step,
A ninth step of checking whether the matching point of the target image by the region-based matching method and the matching point of the target image by the feature-based matching method are edge images;
A tenth step of selecting a parallax by the feature-based matching method when only the matching point of the target image by the feature-based matching method is an edge image as a result of the checking of the ninth step;
An eleventh step of selecting a parallax by verifying validity of the parallax when only the matching point of the target image by the region-based matching method is an edge image as a result of the checking of the ninth step;
A twelfth step of, when neither matching point is an edge image as a result of the checking of the ninth step, checking the direction of the reference image pixel and selecting a parallax by the region-based matching method if it is horizontal, and by the feature-based matching method if it is not horizontal; And
A thirteenth step of selecting a parallax according to the matching point of the target image by the feature-based matching method, the matching point of the target image by the region-based matching method, and the direction of the reference image pixel, when both matching points are edge images as a result of the checking of the ninth step
3D image display method using a stereo image parallax map fusion method comprising a.
[Claim 9] The method of claim 8,
The thirteenth step,
A fourteenth step of checking whether the direction of the matching point of the target image obtained by the feature-based matching method and the direction of the matching point of the target image obtained by the region-based matching method are the same as the direction of the reference image pixel;
A fifteenth step of selecting a parallax by the feature-based matching method when, as a result of the checking of the fourteenth step, only the matching point obtained by the feature-based matching method has the same direction as the reference image pixel;
A sixteenth step of selecting a parallax by verifying validity of the parallax when, as a result of the checking of the fourteenth step, only the matching point obtained by the region-based matching method has the same direction as the reference image pixel;
A seventeenth step of, when as a result of the checking of the fourteenth step the directions of both matching points differ from the direction of the reference image pixel, checking the direction of the reference image pixel and selecting a parallax by the region-based matching method if it is horizontal, and by the feature-based matching method if it is not horizontal; And
An eighteenth step of, when as a result of the checking of the fourteenth step the directions of both matching points are the same as the direction of the reference image pixel, checking the direction of the reference image pixel and selecting a parallax by the region-based matching method if it is horizontal, and by the feature-based matching method if it is not horizontal
3D image display method using a stereo image parallax map fusion method comprising a.
[Claim 10] The method of claim 8 or 9,
The process of selecting the parallax by verifying the validity of the parallax,
A nineteenth step of calculating, within a predetermined region, a mean parallax of the region-based matched parallax map and a standard deviation of the parallaxes of the region-based matched parallax map;
A twentieth step of comparing the absolute difference between the parallax obtained by the region-based matching and the mean parallax in the predetermined region against the standard deviation, and selecting a parallax by the feature-based matching method if the difference is greater than or equal to a first threshold; And
A twenty-first step of, if the difference is smaller than the first threshold, comparing the absolute difference between the parallax obtained by the region-based matching and the mean parallax in the predetermined region with the smaller of the standard deviation and a second threshold, and selecting a parallax by the region-based matching method if the absolute difference is smaller, and by the feature-based matching method if it is larger or equal
3D image display method using a stereo image parallax map fusion method comprising a.
[Claim 11] The method according to any one of claims 7 to 9,
The non-edge pixel fusion process of the eighth step,
A nineteenth step of interpolating parallaxes for the non-edge pixels of the feature-based matched parallax map;
A twentieth step of setting a predetermined region and calculating, for the pixels other than edge pixels in the region, a histogram of the parallaxes obtained by the region-based matching and a histogram of the parallaxes obtained by the feature-based matching;
A twenty-first step of finding, between the histogram of the parallaxes by the region-based matching and the histogram of the parallaxes by the feature-based matching, the histogram having the maximum frequency; And
A twenty-second step of selecting a parallax by the region-based matching method if the histogram having the maximum frequency is the histogram of the parallaxes by the region-based matching, and by the feature-based matching method if it is the histogram of the parallaxes by the feature-based matching
3D image display method using a stereo image parallax map fusion method comprising a.
[Claim 12] In a stereo image parallax map fusion device having a processor,
A first function of initializing an index for indicating a position within an image;
A second function of checking whether a reference image pixel at a position indicated by the index value is an edge image;
A third function of performing edge pixel fusion, moving to a next position, and proceeding to the second function if it is an edge image as a result of the checking by the second function; And
A fourth function of performing non-edge pixel fusion, moving to a next position, and proceeding to the second function if it is not an edge image as a result of the checking by the second function;
A computer-readable recording medium having recorded thereon a program for realizing this.
[Claim 13] In a three-dimensional image display device having a processor,
A first function of receiving left and right images from left and right image input devices and extracting left and right edge images from the received images;
A second function of selecting a parallax according to the edge characteristics of the pixels and the statistical characteristics of the parallax maps, using a region-based matched parallax map and a feature-based matched parallax map obtained by inputting the left and right images and the left and right edge images;
A third function of recording the selected parallaxes as a digital image; And
A fourth function of visualizing the parallax map as a three-dimensional model
A computer-readable recording medium having recorded thereon a program for realizing this.
Family patents:
Publication number | Publication date
KR100411875B1|2003-12-24|
Legal status:
2001-06-15|Application filed by 한국전자통신연구원
2001-06-15|Priority to KR10-2001-0033943A
2002-12-28|Publication of KR20020095752A
2003-12-24|Application granted
2003-12-24|Publication of KR100411875B1
Priority:
Application number | Application date | Patent title
KR10-2001-0033943A|KR100411875B1|2001-06-15|2001-06-15|Method for Stereo Image Disparity Map Fusion And Method for Display 3-Dimension Image By Using it|